
    Statistical Image Reconstruction for High-Throughput Thermal Neutron Computed Tomography

    Neutron Computed Tomography (CT) is an increasingly utilised non-destructive analysis tool in material science, palaeontology, and cultural heritage. With the development of new neutron imaging facilities (such as DINGO, ANSTO, Australia), new opportunities arise to maximise their performance through statistically driven image reconstruction methods, which have yet to see wide-scale application in neutron transmission tomography. This work outlines the implementation of a statistical image reconstruction framework, based on a convex algorithm, that is applicable to the geometry of most neutron tomography instruments. The aim is to obtain imaging quality similar to conventional ramp-filtered back-projection via the inverse Radon transform, but from fewer measured projections, thereby increasing object throughput. By comparing the output of the two frameworks on a tomographic scan of a known three-material cylindrical phantom obtained with the DINGO neutron radiography instrument (ANSTO, Australia), this work illustrates the advantages of statistical image reconstruction over conventional filtered back-projection. The statistical framework was capable of producing image estimates of quality similar to filtered back-projection using only 12.5% of the projections, potentially increasing object throughput at neutron imaging facilities such as DINGO eight-fold.
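    The paper's convex statistical reconstruction algorithm is not reproduced in the abstract, but the projection-count tradeoff it exploits can be sketched with off-the-shelf tools. Below is a minimal Python illustration using scikit-image, with SART standing in for a generic iterative reconstruction and the Shepp-Logan phantom standing in for the three-material cylinder; the angles, iteration count, and 400-versus-50 view split are illustrative choices, not the study's settings.

```python
# Illustrative comparison of ramp-filtered back-projection (FBP) on a
# fully sampled sinogram versus an iterative method (SART) on 12.5% of
# the projections. A generic sketch, not the authors' convex framework.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, iradon_sart

phantom = shepp_logan_phantom()

full_angles = np.linspace(0.0, 180.0, 400, endpoint=False)
few_angles = np.linspace(0.0, 180.0, 50, endpoint=False)  # 12.5% of 400

sino_full = radon(phantom, theta=full_angles)
sino_few = radon(phantom, theta=few_angles)

# Conventional ramp-filtered back-projection on the full data set.
fbp = iradon(sino_full, theta=full_angles, filter_name='ramp')

# Iterative SART on the sparse data; a few passes usually suffice.
sart = iradon_sart(sino_few, theta=few_angles)
for _ in range(3):
    sart = iradon_sart(sino_few, theta=few_angles, image=sart)

for name, rec in [('FBP, 400 views', fbp), ('SART, 50 views', sart)]:
    rmse = np.sqrt(np.mean((rec - phantom) ** 2))
    print(f'{name}: RMSE = {rmse:.4f}')
```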

    The behavior of metropolis-coupled Markov chains when sampling rugged phylogenetic distributions

    © The Author(s) 2018. Bayesian phylogenetic inference relies on Markov chain Monte Carlo (MCMC) to provide numerical approximations of high-dimensional integrals and estimate posterior probabilities. However, MCMC performs poorly when posteriors are very rugged (i.e., regions of high posterior density are separated by regions of low posterior density). One technique that has become popular for improving numerical estimates from MCMC when distributions are rugged is Metropolis coupling (MC3). In MC3, additional chains are employed to sample flattened transformations of the posterior and improve mixing. Here, we highlight several underappreciated behaviors of MC3. Notably, estimated posterior probabilities may appear to converge while being incorrect when individual chains do not mix well, despite different chains sampling trees from all relevant areas of tree space. Counterintuitively, such behavior can be more difficult to diagnose as the number of chains increases. We illustrate these surprising behaviors of MC3 using a simple, non-phylogenetic example and phylogenetic examples involving both constrained and unconstrained analyses. To detect and mitigate the effects of these behaviors, we recommend increasing the number of independent analyses and varying the temperature of the hottest chain in current versions of Bayesian phylogenetic software. Convergence diagnostics based on the behavior of the hottest chain may also help detect these behaviors and could form a useful addition to future software releases.
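    A minimal sketch of MC3 itself, in the spirit of the paper's simple non-phylogenetic example: several chains sample increasingly flattened (tempered) versions of a bimodal target and periodically propose swapping states, so the cold chain can cross between modes. The target density, temperature ladder, and proposal scale below are arbitrary illustrative choices.

```python
# Minimal Metropolis-coupled MCMC (MC3) on a rugged bimodal 1D target.
import numpy as np

rng = np.random.default_rng(1)

def log_post(x):
    # Two well-separated modes: a "rugged" posterior surface.
    return np.logaddexp(-0.5 * ((x + 4.0) / 0.5) ** 2,
                        -0.5 * ((x - 4.0) / 0.5) ** 2)

betas = [1.0, 0.5, 0.25, 0.1]   # 1.0 = cold chain; beta < 1 flattens
chains = [0.0] * len(betas)
cold_samples = []

for it in range(20000):
    # Within-chain Metropolis updates on the tempered targets.
    for i, beta in enumerate(betas):
        prop = chains[i] + rng.normal(0.0, 1.0)
        if np.log(rng.random()) < beta * (log_post(prop) - log_post(chains[i])):
            chains[i] = prop
    # Propose swapping the states of two adjacent chains.
    i = rng.integers(len(betas) - 1)
    log_accept = (betas[i] - betas[i + 1]) * (
        log_post(chains[i + 1]) - log_post(chains[i]))
    if np.log(rng.random()) < log_accept:
        chains[i], chains[i + 1] = chains[i + 1], chains[i]
    cold_samples.append(chains[0])

# With swaps, the cold chain should visit both modes roughly equally.
print('fraction in right-hand mode:', np.mean(np.array(cold_samples) > 0))
```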

    On the Need for New Measures of Phylogenomic Support

    The scale of data sets used to infer phylogenies has grown dramatically in recent decades, providing researchers with an enormous amount of information with which to draw inferences about evolutionary history. However, standard approaches to assessing confidence in those inferences (e.g., nonparametric bootstrap proportions [BP] and Bayesian posterior probabilities [PPs]) are still deeply influenced by statistical procedures and frameworks that were developed when information was much more limited. These approaches largely quantify uncertainty caused by limited amounts of data, which is often vanishingly small with modern, genome-scale sequence data sets. As a consequence, today's phylogenomic studies routinely report near-complete confidence in their inferences, even when different studies reach strongly conflicting conclusions and the sites and loci in a single data set contain much more heterogeneity than our methods assume or can accommodate. We therefore argue that BPs and marginal PPs of bipartitions have outlived their utility as the primary means of measuring phylogenetic support for modern phylogenomic data sets with large numbers of sites relative to the number of taxa. Continuing to rely on these measures will hinder progress towards understanding the remaining sources of uncertainty in the most challenging portions of the Tree of Life. Instead, we encourage researchers to examine the ideas and methods presented in this special issue of Systematic Biology and to explore the area further in their own work. The papers in this special issue outline strategies for assessing confidence and uncertainty in phylogenomic data sets that move beyond stochastic error due to limited data and offer promise for more productive dialogue about the challenges that we face in reaching our shared goal of understanding the history of life on Earth. [Big data; gene tree variation; genomic era; statistical bias.]
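    The claim that stochastic error becomes vanishingly small at genome scale can be made concrete with a toy bootstrap simulation: even a tiny per-site bias yields essentially complete support once enough sites are resampled. All numbers below are invented for illustration and have no connection to any real data set.

```python
# Toy illustration: with a small per-site signal (p slightly above 0.5),
# bootstrap-style support for the favoured resolution approaches 100%
# as the number of sites grows, regardless of whether the signal is real
# heterogeneity-free phylogenetic signal or systematic bias.
import numpy as np

rng = np.random.default_rng(0)
p = 0.52        # hypothetical per-site probability of favouring tree A
n_boot = 1000

for n_sites in (100, 1_000, 10_000, 100_000):
    p_hat = (rng.random(n_sites) < p).mean()  # observed fraction for A
    # Bootstrapping 0/1 site outcomes is equivalent to Binomial(n, p_hat).
    boot = rng.binomial(n_sites, p_hat, size=n_boot)
    support = np.mean(boot > n_sites / 2)
    print(f'{n_sites:>7} sites: bootstrap support for A = {support:.3f}')
```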

    A local effect model-based interpolation framework for experimental nanoparticle radiosensitisation data

    A local effect model (LEM)-based framework capable of interpolating nanoparticle-enhanced, photon-irradiated clonogenic cell survival fraction measurements as a function of nanoparticle concentration was developed and experimentally benchmarked for gold nanoparticle (AuNP)-doped bovine aortic endothelial cells (BAECs) under superficial kilovoltage X-ray irradiation. For three different superficial kilovoltage X-ray spectra, the BAEC survival fraction response was predicted for two different AuNP concentrations and compared to experimental data. The ability of the developed framework to predict the cell survival fraction trends is analysed and discussed. This framework is intended to fill the existing gaps in individual cell line response as a function of nanoparticle concentration under photon irradiation and to assist the scientific community in planning future pre-clinical trials of high-Z nanoparticle-enhanced photon radiotherapy.
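    The paper's LEM framework is not specified in the abstract, but the general shape of such an interpolation can be sketched with the standard linear-quadratic (LQ) survival model, SF(D) = exp(-(αD + βD²)), interpolating α and β across nanoparticle concentration. The concentrations and LQ parameters below are hypothetical placeholders, not the measured BAEC data.

```python
# Sketch of interpolating clonogenic survival over AuNP concentration
# via linear-quadratic parameters, in the spirit of an LEM-style
# framework. All numeric values are invented placeholders.
import numpy as np
from scipy.interpolate import interp1d

# Hypothetical LQ fits at measured AuNP concentrations (mg/ml).
conc = np.array([0.0, 0.5, 1.0])
alpha = np.array([0.10, 0.16, 0.21])    # Gy^-1  (placeholders)
beta = np.array([0.025, 0.030, 0.034])  # Gy^-2  (placeholders)

alpha_of_c = interp1d(conc, alpha)
beta_of_c = interp1d(conc, beta)

def survival(dose_gy, c):
    """LQ survival fraction at dose D for AuNP concentration c."""
    a, b = alpha_of_c(c), beta_of_c(c)
    return np.exp(-(a * dose_gy + b * dose_gy ** 2))

# Predict survival at an unmeasured concentration of 0.75 mg/ml.
for d in (2.0, 4.0, 6.0):
    print(f'{d:.0f} Gy, 0.75 mg/ml: SF = {float(survival(d, 0.75)):.3f}')
```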

    Garbage collection auto-tuning for Java MapReduce on Multi-Cores

    MapReduce has been widely accepted as a simple programming pattern that can form the basis for efficient, large-scale, distributed data processing. The success of the MapReduce pattern has led to a variety of implementations for different computational scenarios. In this paper we present MRJ, a MapReduce Java framework for multi-core architectures. We evaluate its scalability on a four-core, hyperthreaded Intel Core i7 processor, using a set of standard MapReduce benchmarks. We investigate the significant impact that Java runtime garbage collection has on the performance and scalability of MRJ. We propose the use of memory management auto-tuning techniques based on machine learning. With our auto-tuning approach, we achieve MRJ performance within 10% of optimal on 75% of our benchmark tests.
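    MRJ's source is not shown in the abstract; as a generic illustration of the map-shuffle-reduce pattern that such frameworks implement on multi-cores, here is a self-contained Python word count using a process pool. It says nothing about MRJ's Java API or its garbage-collection auto-tuning.

```python
# Generic multi-core MapReduce word count: partition input across
# worker processes (map), then merge the partial counts (reduce).
from collections import Counter
from multiprocessing import Pool

def map_chunk(lines):
    """Map phase: count words in one chunk of the input."""
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

def main():
    docs = ["the quick brown fox", "the lazy dog", "the fox"] * 1000
    n_workers = 4
    # One chunk per core, strided so chunks are similar sizes.
    chunks = [docs[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        partials = pool.map(map_chunk, chunks)   # parallel map phase
    # Reduce phase: merge the per-worker partial counts.
    total = Counter()
    for part in partials:
        total.update(part)
    print(total.most_common(3))

if __name__ == "__main__":
    main()
```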

    P³: Phylogenetic posterior prediction in RevBayes

    © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. Tests of absolute model fit are crucial in model-based inference because poorly structured models can lead to biased parameter estimates. In Bayesian inference, posterior predictive simulations can be used to test absolute model fit. However, such tests have not been commonly practiced in phylogenetic inference due to a lack of convenient and flexible software. Here, we describe our newly implemented tests of model fit using posterior predictive testing, based on both data- and inference-based test statistics, in the phylogenetics software RevBayes. This new implementation makes a large spectrum of models available for use through a user-friendly and flexible interface.
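    RevBayes exposes these tests through its own Rev scripting language, which is not reproduced here; the generic logic of a posterior predictive check, however, fits in a few lines of Python. The Poisson-versus-overdispersed-counts example and the stand-in posterior draws below are invented for illustration.

```python
# The general logic behind posterior predictive model checks like P3:
# simulate replicate data sets from posterior draws, then ask whether a
# test statistic on the observed data looks typical of the replicates.
import numpy as np

rng = np.random.default_rng(42)

# Observed data: overdispersed counts that a Poisson model fits poorly.
observed = rng.negative_binomial(5, 0.5, size=200)

def test_stat(x):
    """Variance-to-mean ratio; ~1 under a well-fitting Poisson model."""
    return x.var() / x.mean()

# Stand-in for posterior samples of the Poisson rate (assumed given).
post_rates = rng.normal(observed.mean(), 0.1, size=1000)

# Posterior predictive simulation and predictive p-value.
rep_stats = np.array([test_stat(rng.poisson(lam, size=observed.size))
                      for lam in post_rates])
p_val = np.mean(rep_stats >= test_stat(observed))
print(f'observed T = {test_stat(observed):.2f}, predictive p = {p_val:.3f}')
```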